106 result(s)
2023 Conference article Open Access OPEN
On-the-fly acquisition and rendering with low cost LiDAR and RGB cameras for marine navigation
Dutta S., Ganovelli F., Cignoni P.
This paper describes a hardware/software system, dubbed NausicaaVR, for acquiring and rendering 3D environments in the context of marine navigation. Like other similar work, it focuses on system calibration and rendering, but this specific context poses new and harder development challenges than the classic automotive scenario. We provide a comprehensive description of all the components of the system, explicitly reporting on the problems encountered and the subtle choices made to overcome them, in an attempt to render an insightful picture of how this and similar systems are built.
Source: GISTAM 2023 - 9th International Conference on Geographical Information Systems Theory, Applications and Management, pp. 176–183, Prague, Czech Republic, 25-27/04/2023
DOI: 10.5220/0011855000003473
See at: ISTI Repository Open Access | www.scitepress.org Open Access | doi.org Restricted | CNR ExploRA


2021 Conference article Open Access OPEN
Evaluating deep learning methods for low resolution point cloud registration in outdoor scenarios
Siddique A., Corsini M., Ganovelli F., Cignoni P.
Point cloud registration is a fundamental task in 3D reconstruction and environment perception. We explore the performance of modern deep-learning-based registration techniques, in particular Deep Global Registration (DGR) and Learning Multi-view Registration (LMVR), on outdoor real-world data consisting of thousands of range maps of a building acquired by a Velodyne LiDAR mounted on a drone. We used these pairwise registration methods in a sequential pipeline to obtain an initial rough registration, whose output can then be globally refined. This simple registration pipeline allows us to assess whether these modern methods are able to deal with such low-quality data. Our experiments demonstrated that, despite some design choices adopted to take into account the peculiarities of the data, more work is required to improve the results of the registration.
Source: STAG 2021 - Eurographics Italian Chapter Conference, pp. 187–191, Online Conference, 28-29/10/2021
DOI: 10.2312/stag.20211489
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE
See at: diglib.eg.org Open Access | ISTI Repository Open Access | CNR ExploRA
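The sequential pipeline described in the abstract above chains each pairwise alignment into a rough global registration before refinement. A minimal sketch of that chaining step (the pairwise transforms would come from DGR, LMVR or a classical ICP; the function name is illustrative, not from the paper):

```python
import numpy as np

def chain_pairwise_poses(pairwise):
    """Compose pairwise rigid transforms into global poses.

    pairwise[i] is a 4x4 homogeneous matrix mapping scan i+1 into the
    frame of scan i; the result maps every scan into the frame of scan 0,
    giving the rough initial registration that a global method can refine.
    """
    poses = [np.eye(4)]
    for T in pairwise:
        poses.append(poses[-1] @ T)  # accumulate along the sequence
    return poses
```

Errors accumulate along such a chain, which is precisely why the abstract notes that the output of this pipeline can be further globally refined.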


2021 Conference article Open Access OPEN
A deep learning method for frame selection in videos for structure from motion pipelines
Banterle F., Gong R., Corsini M., Ganovelli F., Van Gool L., Cignoni P.
Structure-from-Motion (SfM) using the frames of a video sequence can be a challenging task: there is a lot of redundant information, the computational time increases quadratically with the number of frames, and low-quality images (e.g., blurred frames) can decrease the final quality of the reconstruction. To overcome these issues, we present a novel deep-learning architecture meant to speed up SfM by selecting frames using a predicted sub-sampling frequency. This architecture is general and can learn/distill the knowledge of any algorithm for selecting frames from a video for generating high-quality reconstructions. One key advantage is that we can run our architecture in real time, saving computation while keeping high-quality results.
Source: ICIP 2021 - 28th IEEE International Conference on Image Processing, pp. 3667–3671, Anchorage, Alaska, USA, 19-22/09/2021
DOI: 10.1109/icip42928.2021.9506227
Project(s): ENCORE via OpenAIRE
See at: ISTI Repository Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
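The paper's selector is a learned network; as a purely illustrative classical baseline (not the paper's method), one can score frame sharpness with the variance of the Laplacian and keep the sharpest frame inside each sub-sampling window:

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of the 3x3 Laplacian response.

    Blurred frames give a flat response and hence a low score.
    """
    k = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):           # tiny explicit convolution, no SciPy needed
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(out.var())

def select_frames(frames, step):
    """Keep the index of the sharpest frame in each window of `step` frames."""
    picks = []
    for s in range(0, len(frames), step):
        window = frames[s:s + step]
        picks.append(s + int(np.argmax([laplacian_variance(f) for f in window])))
    return picks
```

Here `step` plays the role of the predicted sub-sampling frequency; in the paper that value is produced per-segment by the network rather than fixed.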


2021 Contribution to conference Open Access OPEN
Proceedings - Web3D 2021: 26th ACM International Conference on 3D Web Technology
Ganovelli F., Mc Donald C., Banterle F., Potenziani M., Callieri M., Jung Y.
The annual ACM Web3D Conference is a major event which unites researchers, developers, entrepreneurs, experimenters, artists and content creators in a dynamic learning environment. Attendees share and explore methods of using, enhancing and creating new 3D Web and Multimedia technologies such as X3D, VRML, Collada, MPEG family, U3D, Java3D and other technologies. The conference also focuses on recent trends in interactive 3D graphics, information integration and usability in the wide range of Web3D applications from mobile devices to high-end immersive environments.
Source: New York: ACM, Association for Computing Machinery, 2021
DOI: 10.1145/3485444
See at: dl.acm.org Open Access | ISTI Repository Open Access | CNR ExploRA


2020 Journal article Closed Access
Turning a Smartphone Selfie into a Studio Portrait
Capece N., Banterle F., Cignoni P., Ganovelli F., Erra U., Potel M.
We introduce a novel algorithm that turns a flash selfie taken with a smartphone into a studio-like photograph with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in a controlled environment. For each pair, we have one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend lighting artifacts introduced by a close-up camera flash, such as specular highlights, shadows, and skin shine.
Source: IEEE computer graphics and applications 40 (2020): 140–147. doi:10.1109/MCG.2019.2958274
DOI: 10.1109/mcg.2019.2958274
See at: IEEE Computer Graphics and Applications Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA


2020 Contribution to conference Open Access OPEN
Automatic 3D Reconstruction of Structured Indoor Environments
Pintore G., Mura C., Ganovelli F., Fuentes-Perez L., Pajarola R., Gobbetti E.
Creating high-level structured 3D models of real-world indoor scenes from captured data is a fundamental task which has important applications in many fields. Given the complexity and variability of interior environments and the need to cope with noisy and partial captured data, many open research problems remain, despite the substantial progress made in the past decade. In this tutorial, we provide an up-to-date integrative view of the field, bridging complementary views coming from computer graphics and computer vision. After providing a characterization of input sources, we define the structure of output models and the priors exploited to bridge the gap between imperfect sources and desired output. We then identify and discuss the main components of a structured reconstruction pipeline, and review how they are combined in scalable solutions working at the building level. We finally point out relevant research issues and analyze research trends.
Source: SIGGRAPH '20 - ACM SIGGRAPH 2020 Courses, Online Conference, August 24-28, 2020
DOI: 10.1145/3388769.3407469
DOI: 10.5167/uzh-190473
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE
See at: CRS4 Open Archive Open Access | Zurich Open Repository and Archive Open Access | www.zora.uzh.ch Open Access | dl.acm.org Restricted | doi.org Restricted | doi.org Restricted | CNR ExploRA


2020 Journal article Open Access OPEN
State-of-the-art in Automatic 3D Reconstruction of Structured Indoor Environments
Pintore G., Mura C., Ganovelli F., Fuentes-Perez L., Pajarola R., Gobbetti E.
Creating high-level structured 3D models of real-world indoor scenes from captured data is a fundamental task which has important applications in many fields. Given the complexity and variability of interior environments and the need to cope with noisy and partial captured data, many open research problems remain, despite the substantial progress made in the past decade. In this survey, we provide an up-to-date integrative view of the field, bridging complementary views coming from computer graphics and computer vision. After providing a characterization of input sources, we define the structure of output models and the priors exploited to bridge the gap between imperfect sources and desired output. We then identify and discuss the main components of a structured reconstruction pipeline, and review how they are combined in scalable solutions working at the building level. We finally point out relevant research issues and analyze research trends.
Source: Computer graphics forum (Print) 39 (2020): 667–699. doi:10.1111/cgf.14021
DOI: 10.1111/cgf.14021
DOI: 10.5167/uzh-190475
Project(s): EVOCATION via OpenAIRE, ENCORE via OpenAIRE
See at: CRS4 Open Archive Open Access | Computer Graphics Forum Restricted | Zurich Open Repository and Archive Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA


2019 Report Open Access OPEN
SOROS: Sciadro online reconstruction by odometry and stereo-matching
Ganovelli F., Malomo L., Scopigno R.
In this report we show how to interactively create 3D models for scenes seen by a common off-the-shelf smartphone. Our approach combines Visual Odometry with IMU sensors in order to achieve interactive 3D reconstruction of the scene as seen from the camera.
Source: ISTI Technical reports, 2019

See at: ISTI Repository Open Access | CNR ExploRA


2019 Journal article Open Access OPEN
DeepFlash: turning a flash selfie into a studio portrait
Capece N., Banterle F., Cignoni P., Ganovelli F., Scopigno R., Erra U.
We present a method for turning a flash selfie taken with a smartphone into a photograph as if it was taken in a studio setting with uniform lighting. Our method uses a convolutional neural network trained on a set of pairs of photographs acquired in an ad-hoc acquisition campaign. Each pair consists of one photograph of a subject's face taken with the camera flash enabled and another one of the same subject in the same pose illuminated using a photographic studio-lighting setup. We show how our method can amend defects introduced by a close-up camera flash, such as specular highlights, shadows, skin shine, and flattened images.
Source: Signal processing. Image communication 77 (2019): 28–39. doi:10.1016/j.image.2019.05.013
DOI: 10.1016/j.image.2019.05.013
DOI: 10.48550/arxiv.1901.04252
See at: arXiv.org e-Print Archive Open Access | Signal Processing Image Communication Open Access | ISTI Repository Open Access | Signal Processing Image Communication Restricted | doi.org Restricted | CNR ExploRA


2019 Journal article Open Access OPEN
Automatic modeling of cluttered multi-room floor plans from panoramic images
Pintore G., Ganovelli F., Villanueva A. J., Gobbetti E.
We present a novel and light-weight approach to capture and reconstruct structured 3D models of multi-room floor plans. Starting from a small set of registered panoramic images, we automatically generate a 3D layout of the rooms and of all the main objects inside. Such a 3D layout is directly suitable for use in a number of real-world applications, such as guidance, location, routing, or content creation for security and energy management. Our novel pipeline introduces several contributions to indoor reconstruction from purely visual data. In particular, we automatically partition panoramic images in a connectivity graph, according to the visual layout of the rooms, and exploit this graph to support object recovery and room boundary extraction. Moreover, we introduce a plane-sweeping approach to jointly reason about the content of multiple images and solve the problem of object inference in a top-down 2D domain. Finally, we combine these methods in a fully automated pipeline for creating a structured 3D model of a multi-room floor plan and of the location and extent of clutter objects. These contributions make our pipeline able to handle cluttered scenes with complex geometry that are challenging for existing techniques. The effectiveness and performance of our approach are evaluated on both real-world and synthetic models.
Source: Computer graphics forum (Print) 38 (2019): 347–358. doi:10.1111/cgf.13842
DOI: 10.1111/cgf.13842
See at: ISTI Repository Open Access | Computer Graphics Forum Restricted | onlinelibrary.wiley.com Restricted | CNR ExploRA


2018 Journal article Open Access OPEN
Scalable non-rigid registration for multi-view stereo data
Palma G., Boubekeur T., Ganovelli F., Cignoni P.
We propose a new non-rigid registration method for large 3D meshes from Multi-View Stereo (MVS) reconstruction, characterized by low-frequency shape deformations induced by several factors, such as low sensor quality and irregular sampling coverage of the object. Given a reference model to which we want to align a new 3D mesh, our method first decomposes the mesh into patches using a Lloyd clustering and runs an ICP local registration for each patch. Then, we improve the alignment using a few geometric constraints and, finally, build a global deformation function that blends the estimated per-patch transformations. This function is structured on top of a deformation graph derived from the dual graph of the clustering. Our algorithm is iterated until convergence, progressively increasing the number of patches in the clustering to capture smaller deformations. The method comes with a scalable multicore implementation that enables, for the first time, the alignment of meshes made of tens of millions of triangles in a few minutes. We report extensive experiments of our algorithm on several dense Multi-View Stereo models, using a 3D scan or another MVS reconstruction as reference. Beyond MVS data, we also applied our algorithm to scenarios exhibiting more complex and larger deformations, such as a 3D motion-capture dataset or 3D scans of dynamic objects. The good alignment results obtained for both datasets highlight the efficiency and flexibility of our approach.
Source: ISPRS journal of photogrammetry and remote sensing 142 (2018): 328–341. doi:10.1016/j.isprsjprs.2018.06.012
DOI: 10.1016/j.isprsjprs.2018.06.012
See at: ISTI Repository Open Access | ISPRS Journal of Photogrammetry and Remote Sensing Restricted | www.sciencedirect.com Restricted | CNR ExploRA
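The global deformation function described above blends per-patch transformations over a deformation graph. A toy sketch of the blending idea (a plain Gaussian-weighted linear blend over patch centroids, not the paper's graph-based formulation; all names are illustrative):

```python
import numpy as np

def blend_transforms(p, nodes, transforms, sigma=1.0):
    """Deform point p by blending per-patch 4x4 transforms.

    Weights fall off with a Gaussian on the distance from p to each
    patch centroid in `nodes`, so nearby patches dominate.
    """
    d2 = np.sum((nodes - p) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum()                                # normalize the weights
    ph = np.append(p, 1.0)                      # homogeneous coordinates
    out = sum(wi * (T @ ph) for wi, T in zip(w, transforms))
    return out[:3]
```

Blending matrices linearly like this is only reasonable for the small, low-frequency deformations the abstract targets; large rotations would call for blending in a rotation parameterization instead.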


2018 Conference article Open Access OPEN
Recovering 3D indoor floorplans by exploiting low-cost spherical photography
Pintore G., Ganovelli F., Pintus R., Scopigno R., Gobbetti E.
We present a vision-based approach to automatically recover the 3D existing-conditions information of an indoor structure, starting from a small set of overlapping spherical images. The recovered 3D model includes the as-built 3D room layout with the position of important functional elements located on room boundaries. We first recover the underlying 3D structure as interconnected rooms bounded by walls. This is done by combining geometric reasoning under an Augmented Manhattan World model and Structure-from-Motion. Then, we create, from the original registered spherical images, 2D rectified and metrically scaled images of the room boundaries. Using those undistorted images and the associated 3D data, we automatically detect the 3D position and shape of relevant wall-, floor-, and ceiling-mounted objects, such as electric outlets, light switches, air-vents and light points. As a result, our system is able to quickly and automatically draft an as-built model coupled with its existing conditions using only commodity mobile devices. We demonstrate the effectiveness and performance of our approach on real-world indoor scenes and publicly available datasets.
Source: Pacific Graphics, Hong Kong, 8-11 October 2018

See at: ISTI Repository Open Access | vcg.isti.cnr.it Open Access | CNR ExploRA


2018 Journal article Open Access OPEN
Recovering 3D existing-conditions of indoor structures from spherical images
Pintore G., Pintus R., Ganovelli F., Scopigno R., Gobbetti E.
We present a vision-based approach to automatically recover the 3D existing-conditions information of an indoor structure, starting from a small set of overlapping spherical images. The recovered 3D model includes the as-built 3D room layout with the position of important functional elements located on room boundaries. We first recover the underlying 3D structure as interconnected rooms bounded by walls. This is done by combining geometric reasoning under an Augmented Manhattan World model and Structure-from-Motion. Then, we create, from the original registered spherical images, 2D rectified and metrically scaled images of the room boundaries. Using those undistorted images and the associated 3D data, we automatically detect the 3D position and shape of relevant wall-, floor-, and ceiling-mounted objects, such as electric outlets, light switches, air-vents and light points. As a result, our system is able to quickly and automatically draft an as-built model coupled with its existing conditions using only commodity mobile devices. We demonstrate the effectiveness and performance of our approach on real-world indoor scenes and publicly available datasets.
Source: Computers & graphics 77 (2018): 16–29. doi:10.1016/j.cag.2018.09.013
DOI: 10.1016/j.cag.2018.09.013
See at: ISTI Repository Open Access | Computers & Graphics Restricted | www.sciencedirect.com Restricted | CNR ExploRA


2018 Journal article Open Access OPEN
3D floor plan recovery from overlapping spherical images
Pintore G., Ganovelli F., Pintus R., Scopigno R., Gobbetti E.
We present a novel approach to automatically recover, from a small set of partially overlapping spherical images, an indoor structure representation in terms of a 3D floor plan registered with a set of 3D environment maps. We introduce several improvements over previous approaches based on color/spatial reasoning exploiting Manhattan World priors. In particular, we introduce a new method for geometric context extraction based on a 3D facets representation, which combines color distribution analysis of individual images with sparse multi-view clues. Moreover, we introduce an efficient method to combine the facets from different points of view into a single consistent model, considering the reliability of each facet's contribution. The resulting capture and reconstruction pipeline automatically generates 3D multi-room environments where most other previous approaches fail, such as in the presence of hidden corners and large clutter, even without involving additional dense 3D data or tools. We demonstrate the effectiveness and performance of our approach on different real-world indoor scenes. Our test data will be released to allow for further studies and comparisons.
Source: Computational visual media (Beijing. Print) 4 (2018): 367–383. doi:10.1007/s41095-018-0125-9
DOI: 10.1007/s41095-018-0125-9
See at: Computational Visual Media Open Access | ISTI Repository Open Access | CNR ExploRA


2018 Conference article Open Access OPEN
Reconstructing power lines from images
Ganovelli F., Malomo L., Scopigno R.
We present a method for reconstructing overhead power lines from images. A solution to this problem has a deep impact on the strategies adopted to monitor the many thousands of kilometers of power lines for which, nowadays, the only effective solution requires a high-end laser scanner. The difficulty with image-based algorithms is that images of the wires of power lines typically do not have point features to match among different images. We use a Structure-from-Motion algorithm to retrieve the approximate camera poses and then formulate a minimization problem aimed at refining the camera poses so that the images of the wires project consistently onto a 3D hypothesis.
Source: IVCNZ 2018 - International Conference on Image and Vision Computing New Zealand, Auckland, New Zealand, 19-21 November 2018
DOI: 10.1109/ivcnz.2018.8634765
See at: ISTI Repository Open Access | doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA


2017 Journal article Open Access OPEN
Presentation of 3D scenes through video example
Baldacci A., Ganovelli F., Corsini M., Scopigno R.
Using synthetic videos to present a 3D scene is a common requirement for architects, designers, engineers and Cultural Heritage professionals; however, it is usually time-consuming and, to obtain high-quality results, requires the support of a film-making/computer-animation expert. We introduce an alternative approach that takes the 3D scene of interest and an example video as input, and automatically produces a video of the input scene that resembles the given video example. In other words, our algorithm allows the user to "replicate" an existing video on a different 3D scene. We build on the intuition that a video sequence of a static environment is strongly characterized by its optical flow, or, in other words, that two videos are similar if their optical flows are similar. We therefore recast the problem as producing a video of the input scene whose optical flow is similar to that of the input video. Our intuition is supported by a user study specifically designed to verify this statement. We have successfully tested our approach on several scenes and input videos, some of which are reported in the accompanying material of this paper.
Source: IEEE transactions on visualization and computer graphics 23 (2017): 2096–2107. doi:10.1109/TVCG.2016.2608828
DOI: 10.1109/tvcg.2016.2608828
See at: ISTI Repository Open Access | IEEE Transactions on Visualization and Computer Graphics Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
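The recast problem above hinges on measuring how similar two optical flows are. A standard metric for comparing dense flow fields is the average endpoint error, sketched here (a common metric, not necessarily the exact similarity measure used in the paper):

```python
import numpy as np

def average_endpoint_error(flow_a, flow_b):
    """Mean Euclidean distance between two dense optical-flow fields.

    Both inputs have shape (H, W, 2): per-pixel (dx, dy) displacement.
    Identical flows score 0; larger values mean less similar motion.
    """
    return float(np.linalg.norm(flow_a - flow_b, axis=-1).mean())
```

In an optimization like the one described, a score of this kind would be minimized over camera trajectories so that the rendered video's flow tracks the example video's flow.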


2017 Conference article Restricted
Mobile metric capture and reconstruction in indoor environments
Pintore G., Ganovelli F., Scopigno R., Gobbetti E.
Mobile devices have become progressively more attractive for solving environment-sensing problems. Thanks to their multi-modal acquisition capabilities and their growing processing power, they can perform increasingly sophisticated computer vision and data fusion tasks. In this context, we summarize our recent advances in the acquisition and reconstruction of indoor structures, describing the evolution of the methods from current single-view approaches to novel mobile multi-view methodologies. Starting from an overview of the features and capabilities of current hardware (ranging from commodity smartphones to recent 360-degree cameras), we present in detail specific real-world cases which exploit modern devices to acquire structural, visual and metric information.
Source: SIGGRAPH Asia Symposium on Mobile Graphics and Interactive Applications, Bangkok, Thailand, 27-30 November 2017
DOI: 10.1145/3132787.3139202
See at: doi.org Restricted | www.crs4.it Restricted | CNR ExploRA


2016 Journal article Embargo
3D reconstruction for featureless scenes with curvature hints
Baldacci A., Bernabei D., Corsini M., Ganovelli F., Scopigno R.
We present a novel interactive framework for improving 3D reconstructions, starting from incomplete or noisy results obtained through image-based reconstruction algorithms. The core idea is to enable the user to provide localized hints on the curvature of the surface, which are turned into constraints during an energy-minimization reconstruction. To make this task simple, we propose two algorithms. The first is a multi-view segmentation algorithm that allows the user to propagate the foreground selection of one or more images both to all the images of the input set and to the 3D points, in order to accurately select the part of the scene to be reconstructed. The second is a fast GPU-based algorithm for the reconstruction of smooth surfaces from multiple views, which incorporates the hints provided by the user. We show that our framework can turn a poor-quality reconstruction produced with state-of-the-art image-based reconstruction methods into a high-quality one.
Source: The visual computer 32 (2016): 1605–1620. doi:10.1007/s00371-015-1144-5
DOI: 10.1007/s00371-015-1144-5
Project(s): HARVEST4D via OpenAIRE
See at: The Visual Computer Restricted | link.springer.com Restricted | CNR ExploRA


2016 Conference article Unknown
Assessing the security of buildings: a virtual studio solution
Ahmad A., Balet O., Boin A., Castet J., Donnelley M., Ganovelli F., Kokkinis G., Pintore G.
This paper presents an innovative IT solution, a virtual studio, enabling security professionals to formulate, test and adjust security measures to enhance the security of critical buildings. The concept is to virtualize the environment, enabling experts to examine, assess and improve a building's security in a cost-effective and risk-free way. Our virtual studio solution makes use of the latest advances in computer graphics to reconstruct accurate blueprints as well as 3D representations of entire buildings in a very short timeframe. In addition, our solution enables the creation and simulation of multiple threat situations, allowing users to assess security procedures and various responses. Furthermore, we present a novel device tailored to support collaborative security-planning needs. Security experts from various disciplines evaluated our virtual studio solution, and their analysis is presented in this paper.
Source: International Conference on Information Systems for Crisis Response and Management, Rio de Janeiro, Brazil, 22-25 May 2016
Project(s): VASCO via OpenAIRE

See at: CNR ExploRA


2016 Conference article Restricted
Omnidirectional image capture on mobile devices for fast automatic generation of 2.5D indoor maps
Pintore G., Garro V., Ganovelli F., Gobbetti E., Agus M.
We introduce a light-weight automatic method to quickly capture and recover 2.5D multi-room indoor environments scaled to real-world metric dimensions. To minimize the user effort required, we capture and analyze a single omni-directional image per room using widely available mobile devices. Through a simple tracking of the user movements between rooms, we iterate the process to map and reconstruct entire floor plans. In order to infer 3D clues with minimal processing and without relying on the presence of texture or detail, we define a specialized spatial transform based on catadioptric theory to highlight the room's structure in a virtual projection. From this information, we define a parametric model of each room to formalize our problem as a global optimization solved by Levenberg-Marquardt iterations. The effectiveness of the method is demonstrated on several challenging real-world multi-room indoor scenes.
Source: IEEE Winter Conference on Applications of Computer Vision, Lake Placid, NY, USA, 7-10 March 2016
DOI: 10.1109/wacv.2016.7477631
Project(s): VASCO via OpenAIRE
See at: doi.org Restricted | ieeexplore.ieee.org Restricted | CNR ExploRA
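The per-room parametric model above is fitted by Levenberg-Marquardt iterations. As a generic illustration of such a solver (a minimal damped Gauss-Newton loop, not the paper's actual solver or room parameterization), shown on a trivial least-squares fit:

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop.

    Takes damped Gauss-Newton steps, shrinking the damping `lam` when a
    step lowers the cost and inflating it when the step is rejected.
    """
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * np.sum(residual(x) ** 2)
    for _ in range(iters):
        r, J = residual(x), jac(x)
        A = J.T @ J + lam * np.eye(len(x))      # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        x_new = x + step
        cost_new = 0.5 * np.sum(residual(x_new) ** 2)
        if cost_new < cost:                     # accept: behave more like Gauss-Newton
            x, cost, lam = x_new, cost_new, lam * 0.5
        else:                                   # reject: behave more like gradient descent
            lam *= 10.0
    return x
```

In the paper's setting, `residual` would measure the mismatch between the parametric room model and the clues extracted from the omni-directional image, and `x` would hold the room parameters.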